Java 3D API Specification
APPENDIX E
Equations
This appendix contains the Java 3D equations for fog, lighting, sound, and texture mapping. Many of the equations use the following symbols:
·  Multiplication
•  Function operator for Sound Equations, Dot product for all other Equations
E.1
Fog Equations
To be supplied.
E.2
Lighting Equations
The ideal lighting equation is as follows:
(Eq1)
(Eq2)
(Eq3)
Note: If (Li • N) ≤ 0, then diffi and speci are set to 0.
(Eq4)
Note: For directional lights, atteni is set to 1.
(Eq5)
Note: If the vertex is outside the spotlight cone, as defined by the cutoff angle, spoti is set to 0. For directional and point lights, spoti is set to 1.
This is a subset of the OpenGL lighting model in that the Java 3D ambient and directional lights are not attenuated and only ambient lights contribute to ambient lighting.
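The equation images for (Eq1) through (Eq5) are not reproduced above. As a sketch only, the notes and the relationship to the OpenGL lighting model suggest forms along the following lines. The per-light symbols Lci (light color), Lca (color of ambient light a), Li (direction from the vertex to light i), Si (specular half-vector), Di (spotlight direction), di (vertex-to-light distance), Kci, Kli, Kqi (attenuation coefficients), and expi (spotlight concentration) are assumptions borrowed from the OpenGL model and should be checked against the per-light values in the original specification.

```latex
% Sketch only: OpenGL-style forms consistent with the notes above,
% not a verbatim reproduction of (Eq1)-(Eq5).
\begin{aligned}
I &= M_e + M_a \sum_{a} L_{ca}
     + \sum_{i} \mathit{atten}_i \, \mathit{spot}_i \,
       L_{ci}\,\bigl(M_d\,\mathit{diff}_i + M_s\,\mathit{spec}_i\bigr)
     && \text{(cf. Eq1; the ambient sum runs over ambient lights only)} \\
\mathit{diff}_i &= L_i \cdot N && \text{(cf. Eq2)} \\
\mathit{spec}_i &= (S_i \cdot N)^{\mathit{shin}} && \text{(cf. Eq3)} \\
\mathit{atten}_i &= \frac{1}{K_{ci} + K_{li}\,d_i + K_{qi}\,d_i^{2}} && \text{(cf. Eq4)} \\
\mathit{spot}_i &= (-L_i \cdot D_i)^{\mathit{exp}_i} && \text{(cf. Eq5)}
\end{aligned}
```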
The parameters used in the lighting equation are as follows:
E = Eye vector
Ma = Material ambient color
Md = Material diffuse color
Me = Material emissive color
Ms = Material specular color
N = Vertex normal
shin = Material shininess

The per-light values are as follows:
Fallbacks and Approximations
1. An implementation may approximate the specular function using a different power function that produces a similar specular highlight. For example, the PHIGS+ lighting model specifies that the reflection vector (the light vector reflected about the vertex normal) is dotted with the eye vector, and that this dot product is raised to the specular power. An implementation that uses such a model should map the shininess into an exponent that most closely matches the effect produced by the ideal equation.
2. Implementations that do not have separate ambient and diffuse colors may fall back to using an ambient intensity as a percentage of the diffuse color. This ambient intensity should be calculated using the NTSC luminance equation:
I = 0.30 · Red + 0.59 · Green + 0.11 · Blue
(Eq6)
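A minimal sketch of this fallback; the helper below is illustrative and not part of the Java 3D API.

```java
// Illustrative helper: derive an ambient intensity from a material's diffuse
// color using the NTSC luminance equation (Eq6). Components are in [0, 1].
final class AmbientIntensitySketch {
    static float ambientIntensity(float red, float green, float blue) {
        return 0.30f * red + 0.59f * green + 0.11f * blue;
    }
}
```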
E.3
Sound Equations
There are different sets of sound equations, depending on the use of headphones or speakers.
E.3.1
Headphone Playback Equations
For each sound source, Java 3D calculates a separate left and right output signal. Each left and right sound image is made up of an interaural intensity difference and an interaural delay. The calculation results are a set of direct and indirect (delayed) sound signals that are mixed together and sent to the left and right transducers of the audio playback system.
E.3.1.1
Interaural Time Difference (Delay)
For each point and cone sound source node, the left and right output signals are delayed based on the location of the sound and the orientation of the head. The time difference between these two signals is called the interaural time difference (ITD). The time delay of a sound reaching the far ear is affected by the shadow of the listener's head. Java 3D approximates the ITD using a spherical head model. The interaural path difference is calculated based on the following cases:
1. For a sound source with nearly parallel sound incidence (Dh > De/2); see Figure E-1:
Figure E-1
Distant Source - Nearly Parallel Sound Incidence
2. For sources close to the head, with the signal reaching one ear by a direct path (the sine of the source's angle of incidence is greater than De/2Dh, and thus -1 ≤ De/2Dh ≤ 1); see Figure E-2:
Figure E-2
Source Near Head - Signal to One Ear is Indirect
3. For a sound source very close to the head, so that the signal reaches both ears by indirect paths around the head (the sine of the source's angle of incidence is less than or equal to De/2Dh); see Figure E-3:
Figure E-3
Source Very Close to Head - Signals to Both Ears are Indirect
The time from the sound source to the closest ear is Ec/S, and the time from the sound source to the farthest ear is Ef/S, where S is the region's speed of sound.
If the sound is closest to the left ear, then:
ITDl = Ec/S, and ITDr = Ef/S
(Eq7)
If the sound is closest to the right ear, then:
ITDl = Ef/S, and ITDr = Ec/S
(Eq8)
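As a sketch, (Eq7) and (Eq8) amount to the selection below, taking Ec, Ef, and the speed of sound S as already computed; the helper is illustrative only.

```java
// Illustrative selection per (Eq7)/(Eq8): the closest ear gets the shorter
// delay Ec/S, the farthest ear the longer delay Ef/S (both in seconds).
final class ItdSketch {
    static double[] interauralTimeDelays(double ec, double ef, double speedOfSound,
                                         boolean closestIsLeft) {
        double near = ec / speedOfSound;
        double far  = ef / speedOfSound;
        return closestIsLeft ? new double[] { near, far }   // { ITDl, ITDr }
                             : new double[] { far, near };
    }
}
```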
The parameters used in the ITD equations are as follows:
E.3.1.2
Interaural Intensity (Gain) Difference
For each active and playing PointSound and ConeSound source, s, separate calculations for the left and right signals (based on which ear is closest to and which is farthest from the source) are combined with the non-spatialized BackgroundSound sources to create a stereo sound image.
(Eq9)
Note: For BackgroundSound sources ITD() is an identity function so there is no delay applied to the sample for these sources.
(Eq10)
Note: For BackgroundSound sources Gd() = Ga() = 1.0. For PointSound sources Ga() = 1.0.
(Eq11)
Note: For BackgroundSound sources Fd() and Fa() are identity functions. For PointSound sources Fa() is an identity function.
If the Sound source is on the right side of the head, Ec is used for left G and F calculations and Ef is used for right. Conversely, if the Sound source is on the left side of the head, Ef is used for left calculations and Ec is used for right.
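The equation images for (Eq9) through (Eq11) are not reproduced above, so the sketch below only illustrates the structure the notes describe: each spatialized source sample is scaled by its combined distance and angular gains (Gd · Ga), delayed by its per-ear ITD, and summed with the non-spatialized BackgroundSound. The helper names, the whole-sample delay, and the omission of the Fd/Fa filtering stage are all simplifying assumptions.

```java
// Illustrative structure only: one ear's signal as a sum of gain-scaled,
// ITD-delayed source samples plus the undelayed, unity-gain BackgroundSound.
// gains[s] stands for Gd() * Ga() of source s for this ear; low-pass
// filtering (Fd, Fa) is omitted for brevity.
final class HeadphoneMixSketch {
    static double[] earMix(double[][] sources, double[] gains, int[] itdSamples,
                           double[] background) {
        double[] out = background.clone();
        for (int s = 0; s < sources.length; s++) {
            for (int t = 0; t < out.length; t++) {
                int src = t - itdSamples[s];                 // ITD() as a sample delay
                if (src >= 0 && src < sources[s].length) {
                    out[t] += gains[s] * sources[s][src];    // Gd() * Ga() scaling
                }
            }
        }
        return out;
    }
}
```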
E.3.1.3
Attenuation
For the vector from the sound source position through the listener's position, the intersection points with the spheres defined by the distance attenuation array determine both the index k within the distanceAttenuation arrays from which distance gain scale factors are acquired and the distances, d1 = d[k] and d2 = d[k+1], from the sound position to these intersection points.
Figure E-4
ConeSound with a Single Distance Gain Attenuation Array
For the vector from the sound source position through the listener's position, the intersection points with the two ellipses closest to the listener (points A and B) determine both the index k within the front and back distanceAttenuation arrays from which distance gain scale factors are acquired and the distances, d1 and d2, from the sound position to these intersection points.
Figure E-5
ConeSound with Two Distance Attenuation Arrays
The equation for the distance gain is:
(Eq12)
Angular attenuation for both the spherical and elliptical cone sounds is identical. The angular distances in the attenuation array closest to the angle between the ConeSound's direction axis and the vector from the sound source to the listener are found and define the index k into the angular attenuation array elements. The equation for the angular gain is:
(Eq13)
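The gain equations (Eq12) and (Eq13) are not reproduced above. A natural reading of the surrounding text is a linear interpolation between the two attenuation-array entries that bracket the listener's distance (or the sound's angle); the sketch below is written under that assumption and is not taken from the specification.

```java
// Sketch under the assumption that (Eq12)/(Eq13) linearly interpolate between
// the two bracketing attenuation-array entries: breakpoints[k] holds the
// distance (or angle) values and gains[k] the corresponding gain scale factors.
final class AttenuationGainSketch {
    static double interpolatedGain(double x, double[] breakpoints, double[] gains) {
        if (x <= breakpoints[0]) return gains[0];
        int last = breakpoints.length - 1;
        if (x >= breakpoints[last]) return gains[last];
        int k = 0;
        while (x > breakpoints[k + 1]) k++;                  // d[k] <= x <= d[k+1]
        double t = (x - breakpoints[k]) / (breakpoints[k + 1] - breakpoints[k]);
        return gains[k] + t * (gains[k + 1] - gains[k]);
    }
}
```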
The parameters used in the IID equations are as follows:
E.3.1.4
Doppler Effect Equations
The frequency of the sound waves emanating from the source is lowered based on the speed of the source in relation to the listener and on the sound's wavelength, as follows:
S(f)' = S(f) - [Ds · (Dv/W(f,Dh))]
(Eq14)
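A direct transcription of (Eq14), taking Ds, Dv, and W(f, Dh) as already-computed quantities; their precise definitions belong to the Doppler parameter list and are not assumed here.

```java
// (Eq14) as written: the adjusted frequency is the source frequency S(f)
// lowered by Ds * (Dv / W(f, Dh)). All three inputs are taken as given.
final class DopplerSketch {
    static double adjustedFrequency(double sourceFrequency, double ds, double dv,
                                    double wavelength) {
        return sourceFrequency - ds * (dv / wavelength);
    }
}
```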
The parameters used in the Doppler effect equations are as follows:
E.3.1.5
Reverberation Equations
The overall reverberant sound, used to give the impression of the aural space in which the active and enabled sources are playing, is added to the stereo sound image output from equation (Eq9).
(Eq15)
Reverberation for each sound is approximated as follows:
(Eq16)
Note that the reverberation calculation outputs the same image to both the left and right output signals (thus there is a single mono calculation for each reverberated sound). Correct first-order (early) reflections, based on the location of the sound source, the listener, and the active AuralAttribute bounds, are not required for this version of Java 3D. Approximations based on the reverberation delay time, either supplied by the application or calculated as the average delay time within the selected AuralAttribute application region, will be used.
The feedback loop is repeated until the AuralAttribute reverberation feedback loop count is reached or Gri ≤ 0.000976 (effective zero amplitude, -60 dB, using the measure of a -6 dB drop for every doubling of distance).
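Since the images for (Eq15) and (Eq16) are not reproduced, the sketch below only illustrates the loop-termination behavior described above, using a generic delayed-echo approximation with a single reflection gain and delay; it is not the specification's reverberation equation.

```java
// Generic approximation of the reverberation loop described above: add
// successively delayed, attenuated copies of the dry signal, stopping when
// the feedback loop count is reached or the accumulated gain falls to the
// effective-zero threshold of 0.000976 (-60 dB).
final class ReverbSketch {
    static double[] reverberate(double[] dry, int delaySamples,
                                double reflectionGain, int maxLoops) {
        double[] out = dry.clone();
        double gain = reflectionGain;                        // accumulated gain Gr
        for (int loop = 0; loop < maxLoops && gain > 0.000976; loop++) {
            int offset = delaySamples * (loop + 1);
            for (int t = offset; t < out.length; t++) {
                out[t] += gain * dry[t - offset];            // delayed, attenuated copy
            }
            gain *= reflectionGain;
        }
        return out;
    }
}
```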
Fallbacks and Approximations
1. Reducing the number of feedback loop iterations while still maintaining the overall impression of the environment. For example, if -10 dB were used as the drop in gain for every doubling of distance, a scale factor of 0.015625 could be used as the effective zero amplitude, which can be reached in only 15 loop iterations (rather than the 25 needed to reach 0.000976).
2. Using pre-programmed "room" reverberation algorithms that allow selection from a fixed set of "reverberation types" (e.g., large hall, small living room), which have implied reflection coefficients, delay times, and feedback loop durations.
The parameters used in the Reverberation equations are as follows:
E.3.1.6
Low-pass Filtering
An N-pole low-pass filter may be used to perform the simple angular and distance filtering defined in this version of Java 3D. These simple low-pass filters are meant only as an approximation of full FIR filters (to be added in some future version of Java 3D).
Fallbacks and Approximations
1. If more than one low-pass filter is to be applied to the sound source (e.g., both an angular and a distance filter applied to a ConeSound source), it is only necessary to use a single filter, specifically the one that has the lowest cutoff frequency (see the sketch following this list).
2. There is no requirement to support anything beyond very simple two-pole filtering. Any type of multi-pole low-pass filter can be used. If higher N-pole or compound filtering is available on the device on which sound rendering is being performed, its use is encouraged but not required.
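As an illustration, a one-pole low-pass filter is about the simplest form consistent with the filtering described above; the coefficient formula below is one common choice and is not mandated by the specification.

```java
// Illustrative one-pole low-pass filter: out[n] = out[n-1] + a * (in[n] - out[n-1]).
// When both an angular and a distance filter would apply (fallback 1 above),
// a single pass with the lower of the two cutoff frequencies suffices.
final class LowPassSketch {
    static double[] lowPass(double[] in, double cutoffHz, double sampleRateHz) {
        double a = 1.0 - Math.exp(-2.0 * Math.PI * cutoffHz / sampleRateHz);
        double[] out = new double[in.length];
        double y = 0.0;
        for (int n = 0; n < in.length; n++) {
            y += a * (in[n] - y);                            // smooth toward the input
            out[n] = y;
        }
        return out;
    }
}
```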
E.3.2
Speaker Playback Equations
Different speaker playback equations are used, depending on whether the system uses monaural or stereo speakers.
E.3.2.1
Monaural Speaker Output
The equations for headphone playback need only be modified to output a single signal, rather than two signals for the left and right transducers. Although there is only one speaker, distance and filter attenuation, Doppler effect, elevation, and front and back cues can be distinguished by the listener and should be included in the sound image generated.
E.3.2.2
Stereo Speaker Output
In a two-speaker playback system, the signal from one speaker is actually heard by both ears, and this affects the spectral balance and the interaural intensity and time differences heard by each of the listener's ears. Cross-talk cancellation must be performed on the right and left signals to compensate for the delayed, attenuated signal heard by the ear opposite the speaker. Thus a delayed, attenuated signal for each of the stereo signals must be added to the output from the equations for headphone playback.
The equations for stereo speaker playback assume that the two speakers are placed symmetrically about the listener (at the same off-axis angle from the viewing axis at an equal distance from the center of the listener's head).
I(t)l' = I(t)l + [D(t) • [G(P, θ) · I(t)r]]
(Eq17)
I(t)r' = I(t)r + [D(t) • [G(P, θ) · I(t)l]]
(Eq18)
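A sketch of the structure of (Eq17) and (Eq18), treating the delay D(t) as a fixed whole-sample offset and the gain G(P, θ) as a precomputed scalar; both simplifications are assumptions about terms whose definitions are part of the cross-talk parameter list.

```java
// Structure of (Eq17)/(Eq18): each output channel is the headphone-playback
// signal plus a delayed, gain-scaled copy of the opposite channel.
// Assumes left and right buffers have equal length.
final class CrossTalkSketch {
    static double[][] compensate(double[] left, double[] right,
                                 int delaySamples, double gain) {
        double[] outL = left.clone();
        double[] outR = right.clone();
        for (int t = delaySamples; t < left.length; t++) {
            outL[t] += gain * right[t - delaySamples];       // (Eq17)
            outR[t] += gain * left[t - delaySamples];        // (Eq18)
        }
        return new double[][] { outL, outR };
    }
}
```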
The parameters used in the cross-talk equations, expanding on the terms used for the equations for headphone playback, are as follows:
E.4
Texture Mapping Equations
To be supplied.
Copyright © 1997, Sun Microsystems, Inc. All rights reserved.